Recent methods demonstrate that data augmentation using counterfactual knowledge can teach models the causal structure of a task, leading to robust and generalizable models. However, such counterfactual data often has a limited scale and diversity if crowdsourced and is computationally expensive to extend to new perturbation types if generated using supervised methods. To address this, we introduce a new framework called DISCO for automatically generating high-quality counterfactual data at scale. DISCO engineers prompts to generate phrasal perturbations with a large general language model. Then, a task-specific teacher model filters the generation to distill high-quality counterfactual data. We show that learning with this counterfactual data yields a comparatively small student model that is 6% (absolute) more robust and generalizes 5% better across distributions than baselines on various challenging evaluations. This model is also 15% more sensitive in differentiating original and counterfactual examples, on three evaluation sets written by human workers and via human-AI collaboration.
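A minimal sketch of the generate-then-filter idea described in this abstract (the prompt template, the `lm.complete` and `teacher.predict_proba` interfaces, and the confidence threshold are illustrative assumptions, not DISCO's actual implementation):

    # Hypothetical sketch: prompt a large general LM for counterfactual rewrites,
    # then keep only generations the task-specific teacher confidently labels
    # with the intended (flipped) label.
    from typing import List

    def generate_counterfactuals(lm, premise: str, hypothesis: str,
                                 target_label: str, n: int = 10) -> List[str]:
        # Ask the general LM to minimally rewrite the hypothesis so that the
        # new (premise, hypothesis) pair would carry `target_label`.
        prompt = (
            f"Premise: {premise}\n"
            f"Hypothesis: {hypothesis}\n"
            f"Rewrite the hypothesis minimally so the relation becomes {target_label}:\n"
        )
        return [lm.complete(prompt) for _ in range(n)]  # assumed interface

    def distill(teacher, premise: str, candidates: List[str],
                target_label: str, threshold: float = 0.9) -> List[str]:
        # Teacher-model filtering: keep candidates the task model confidently
        # assigns the intended counterfactual label.
        kept = []
        for hyp in candidates:
            probs = teacher.predict_proba(premise, hyp)  # assumed interface
            if probs[target_label] >= threshold:
                kept.append(hyp)
        return kept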
Recent work has shown that large language models are capable of generating natural language reasoning steps or Chains-of-Thoughts (CoT) to answer a multi-step question when prompted to do so. This is insufficient, however, when the necessary knowledge is not available or up-to-date within a model's parameters. A straightforward approach to address this is to retrieve text from an external knowledge source using the question as a query and prepend it as context to the model's input. This, however, is also insufficient for multi-step QA where \textit{what to retrieve} depends on \textit{what has already been derived}. To address this issue, we propose IRCoT, a new approach that interleaves retrieval with CoT for multi-step QA, guiding the retrieval with CoT and in turn using retrieved results to improve CoT. Our experiments with GPT-3 show substantial improvements in retrieval (up to 22 points) and downstream QA (up to 16 points) over the baselines on four datasets: HotpotQA, 2WikiMultihopQA, MuSiQue, and IIRC. Notably, our method also works well for much smaller models such as Flan-T5-large (0.7B) without any additional training.
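A rough sketch of the interleaving loop the abstract describes (the `retriever.search` and `lm.generate_next_cot_sentence` interfaces and the stopping heuristic are assumptions for illustration; the paper's exact prompting and retrieval details differ):

    # Hypothetical sketch of interleaved retrieval and chain-of-thought (CoT).
    def ircot_answer(question: str, retriever, lm, max_steps: int = 8, k: int = 4):
        paragraphs = retriever.search(question, top_k=k)  # initial retrieval with the question
        cot_so_far = []
        for _ in range(max_steps):
            # Reason step: generate the next CoT sentence from all retrieved text so far.
            next_sentence = lm.generate_next_cot_sentence(question, paragraphs, cot_so_far)
            cot_so_far.append(next_sentence)
            if "answer is" in next_sentence.lower():      # assumed stopping heuristic
                break
            # Retrieve step: use the newly generated sentence as the next query.
            paragraphs += retriever.search(next_sentence, top_k=k)
        return cot_so_far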
Characterizing the implicit structure of the computation within neural networks is a foundational problem in the area of deep learning interpretability. Can their inner decision process be captured symbolically in some familiar logic? We show that any transformer neural network can be translated into an equivalent fixed-size first-order logic formula which may also use majority quantifiers. The idea is to simulate transformers with highly uniform threshold circuits and leverage known theoretical connections between circuits and logic. Our findings also reveal the surprising fact that the entire transformer computation can be reduced merely to the division of two (large) integers. While our results are most pertinent for transformers, they apply equally to a broader class of neural network architectures, namely those with a fixed-depth uniform computation graph made up of standard neural net components, which includes feedforward and convolutional networks.
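For reference, the majority quantifier used in such formulas has the standard semantics (our gloss, not a formula taken from the paper): $\mathsf{M}x.\,\varphi(x)$ holds on an input of length $n$ iff $\varphi(x)$ is true for strictly more than $n/2$ of the positions $x \in \{1, \dots, n\}$. First-order logic extended with this quantifier is closely tied to uniform constant-depth threshold circuits, which is what makes it a natural target for the circuit-based translation described above.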
We prove that transformer neural networks with logarithmic precision in the input length (and whose feedforward subnetworks are computable in space linear in the input length) can be simulated by constant-depth uniform threshold circuits. Such transformers therefore only recognize formal languages in $\mathsf{TC}^0$, the class of languages defined by constant-depth, polynomial-size threshold circuits. This establishes a connection between a practical claim in NLP and a theoretical conjecture in computational complexity theory: "attention is all you need" (Vaswani et al., 2017), i.e., transformers are capable of all efficient computation only if every efficiently solvable problem can be solved using logarithmic space, i.e., $\mathsf{L} = \mathsf{P}$. We also construct a transformer that can evaluate any constant-depth threshold circuit on any input, proving that transformers can follow instructions represented in $\mathsf{TC}^0$.
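The conditional claim above rests on a chain of standard containments (a restatement of the argument, not a new result): log-precision transformers $\subseteq$ uniform $\mathsf{TC}^0 \subseteq \mathsf{L} \subseteq \mathsf{P}$. If transformers could carry out every polynomial-time computation, the whole chain would collapse, forcing $\mathsf{L} = \mathsf{P}$.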
Language models demonstrate both quantitative improvements and new qualitative capabilities as their scale increases. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game Benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, commonsense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Question-answering datasets require a broad set of reasoning skills. We show how to use question decompositions to teach language models these broad reasoning skills in a robust fashion. Specifically, we use widely available QDMR representations to programmatically create hard-to-cheat synthetic contexts for real questions in six multi-step reasoning datasets. These contexts are carefully designed to avoid reasoning shortcuts prevalent in real contexts that prevent models from learning the right skills. This results in a pretraining dataset, named TeaBReaC, containing 525K multi-step questions (with associated formal programs) covering about 900 reasoning patterns. We show that pretraining standard language models (LMs) on TeaBReaC before fine-tuning them on target datasets improves their performance by up to 13 F1 points across 4 multi-step QA datasets, with up to 21 point gain on more complex questions. The resulting models also demonstrate higher robustness, with a 5-8 F1 point improvement on two contrast sets. Furthermore, TeaBReaC pretraining substantially improves model performance and robustness even when starting with numerate LMs pretrained using recent methods (e.g., PReasM, POET). Our work thus shows how to effectively use decomposition-guided contexts to robustly teach multi-step reasoning.
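The "programmatically created, hard-to-cheat synthetic contexts" can be pictured with a toy sketch along the following lines (the entity names and fact templates are invented for illustration; TeaBReaC's actual construction from QDMR programs is more elaborate):

    import random

    # Hypothetical sketch: turn a QDMR-style decomposition into a synthetic context
    # whose facts support each reasoning step with made-up entities, so that
    # surface shortcuts available in real text cannot be exploited.
    def synthetic_context(decomposition_steps):
        entities = [f"entity_{i}" for i in range(len(decomposition_steps) + 1)]
        facts = []
        for i, step in enumerate(decomposition_steps):
            # Each synthetic fact links the previous intermediate answer to the next one.
            facts.append(f"The {step} of {entities[i]} is {entities[i + 1]}.")
        random.shuffle(facts)                 # avoid positional shortcuts
        return " ".join(facts), entities[-1]  # context and gold final answer

    context, answer = synthetic_context(["director", "spouse", "birthplace"])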
Investigating the reasoning abilities of transformer models, and discovering new challenging tasks for them, is a topic of much interest. Recent studies have found these models to be surprisingly strong at performing deductive reasoning over formal logical theories expressed in natural language. A shortcoming of these studies, however, is that they do not take into account that logical theories, when sampled uniformly at random, do not necessarily lead to hard instances. We propose a new methodology for creating challenging algorithmic reasoning datasets that focus on natural language satisfiability (NLSAT) problems. The key idea is to draw on empirical sampling of hard propositional SAT problems together with insights from complexity-theoretic studies of language. This methodology allows us to distinguish easy instances from hard ones and to systematically increase the complexity of existing reasoning benchmarks such as RuleTaker. We find that current transformers, given sufficient training data, are surprisingly robust at solving the resulting NLSAT problems of substantially increased difficulty. They also exhibit some degree of scale-invariance, that is, the ability to generalize to problems of larger size and scope. Our results, however, also reveal important limitations: careful sampling of training data is crucial for building models that generalize to larger problems, and the limited scale-invariance of transformer models suggests they are far from learning robust deductive reasoning algorithms.
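A minimal sketch of what sampling hard instances and verbalizing them could look like (random 3-SAT drawn near the well-known clause-to-variable phase-transition ratio of roughly 4.26; the natural-language templates are invented, not the paper's):

    import random

    def sample_3sat(n_vars: int, ratio: float = 4.26):
        # Random 3-SAT instances near the clause-to-variable phase transition
        # are empirically hard; uniform sampling far from it tends to be easy.
        n_clauses = round(ratio * n_vars)
        clauses = []
        for _ in range(n_clauses):
            vars_ = random.sample(range(1, n_vars + 1), 3)
            clauses.append([v if random.random() < 0.5 else -v for v in vars_])
        return clauses

    def verbalize(clauses):
        # Toy natural-language rendering of each clause as a disjunctive rule.
        def lit(l):
            return f"person {abs(l)} is {'not ' if l < 0 else ''}happy"
        return " ".join("Either " + ", or ".join(lit(l) for l in clause) + "."
                        for clause in clauses)

    print(verbalize(sample_3sat(5)))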
Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning. Motivated by these promising results, we investigate the feasibility of extracting a discrete (textual) interpretation of continuous prompts that is faithful to the problem they solve. In practice, we observe a "wayward" behavior between the task solved by continuous prompts and their nearest-neighbor discrete projections: we can find continuous prompts that solve a task while being projected to an arbitrary text (e.g., the definition of a different or even a contradictory task), while remaining within a very small (2%) margin of the best continuous prompt of the same size for the task. We provide intuitions behind this odd and surprising behavior, as well as extensive empirical analyses quantifying the effect of various parameters. For instance, for larger model sizes we observe higher waywardness, i.e., we can find prompts that map more closely to any arbitrary text with a smaller loss in accuracy. These findings have important implications for the difficulty of faithfully interpreting continuous prompts and their generalization across models and tasks, providing guidance for future progress in prompting language models.
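A minimal sketch of the nearest-neighbor discrete projection mentioned above (the embedding interface is an assumption; the optimization procedure for finding wayward prompts is not shown):

    import torch

    def project_to_tokens(prompt_embeddings: torch.Tensor, vocab_embeddings: torch.Tensor):
        # Map each continuous prompt vector to the vocabulary token whose embedding
        # is closest to it, yielding a discrete (textual) reading of the prompt.
        # prompt_embeddings: [prompt_len, dim]; vocab_embeddings: [vocab_size, dim]
        distances = torch.cdist(prompt_embeddings, vocab_embeddings)  # [prompt_len, vocab_size]
        return distances.argmin(dim=-1)  # token ids of the nearest neighbors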
Many real-world problems require the combined application of multiple reasoning abilities: employing suitable abstractions, commonsense knowledge, and creative problem-solving strategies. To help advance AI systems toward such capabilities, we propose a new reasoning challenge, namely Fermi Problems (FPs): questions whose answers can only be approximately estimated because their precise computation is either impractical or impossible. For example, "How much would the sea level rise if all the ice in the world melted?" FPs are commonly used in quizzes and interviews to bring out and evaluate the creative reasoning abilities of humans. To do the same for AI systems, we present two datasets: 1) a collection of 1K real-world FPs sourced from quizzes and olympiads; and 2) a bank of 10K synthetic FPs of intermediate complexity to serve as a sandbox for the harder real-world challenge. In addition to question-answer pairs, the datasets contain detailed solutions in the form of executable programs and supporting facts, helping in supervision and evaluation of intermediate steps. We demonstrate that even extensively fine-tuned large-scale language models perform poorly on these datasets, with estimates off by two orders of magnitude on average. Our contribution is thus the crystallization of several unsolved AI problems into a single new challenge that we hope will spur further advances in building systems that can reason.
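To give a sense of the estimation involved, a back-of-the-envelope calculation for the example question above (our own illustration, not a solution from the datasets): the land ice in Antarctica and Greenland holds roughly $3 \times 10^{7}$ km$^3$ of water, and the ocean surface area is roughly $3.6 \times 10^{8}$ km$^2$, so melting it all would raise sea level by about $(3 \times 10^{7}) / (3.6 \times 10^{8}) \approx 0.08$ km, i.e., on the order of 80 m; published estimates that account for ice density and coastline changes land around 60-70 m, so the estimate is within the right order of magnitude.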
Knowledge-based visual question answering involves answering questions that require external knowledge beyond the content of the image. Such knowledge typically comes in various forms, including visual, textual, and commonsense knowledge. Using more knowledge sources increases the chance of retrieving more irrelevant or noisy facts, making it challenging to comprehend the facts and find the answer. To address this challenge, we propose Multi-modal Answer Validation using External knowledge (MAVEx), where the idea is to validate a set of promising answer candidates based on answer-specific knowledge retrieval. Instead of searching for the answer among a vast collection of often irrelevant facts, as most existing approaches do, MAVEx aims to learn how to extract relevant knowledge from noisy sources, which knowledge source to trust for each answer candidate, and how to validate the candidate using that source. Our multi-modal setting is the first to leverage external visual knowledge (images retrieved via Google search), in addition to textual knowledge in the form of Wikipedia sentences and ConceptNet concepts. Our experiments on OK-VQA, a challenging knowledge-based VQA dataset, demonstrate that MAVEx achieves new state-of-the-art results. Our code is available at https://github.com/jialinwu17/mavex
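A rough sketch of the candidate-validation idea described above (the source names follow the abstract, but the scoring interface and trust weighting are our own illustrative assumptions, not MAVEx's architecture):

    # Hypothetical sketch: score each answer candidate against knowledge retrieved
    # specifically for that candidate, with a learned trust weight per source.
    SOURCES = ["wikipedia", "conceptnet", "google_images"]

    def validate_candidates(question, image, candidates, retrieve, score, trust):
        # retrieve(source, question, candidate) -> candidate-specific evidence
        # score(question, image, candidate, evidence) -> support score in [0, 1]
        # trust(question, candidate, source) -> learned weight for this source
        best, best_score = None, float("-inf")
        for cand in candidates:
            total = 0.0
            for src in SOURCES:
                evidence = retrieve(src, question, cand)
                total += trust(question, cand, src) * score(question, image, cand, evidence)
            if total > best_score:
                best, best_score = cand, total
        return best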